Emma Charlton, Formative Content

How should senior communications leaders juggle the opportunities and challenges that are emerging right now?

That was the central question at the Page event in London, hosted by UK cybersecurity company Darktrace. There was a mix of excitement and caution around artificial intelligence (AI) and its implications for ethics, work, communications and culture.

The role of AI in corporate communications and in cybersecurity was the focus of the session, hosted by Carolyn Esser, Chief Corporate Affairs Officer, and Ben Lyons, Head of Policy and Public Affairs, at Darktrace.

Technology can play many different roles and has enormous potential to help create rapid, personalised content. The group considered examples including OpenAI's Sora text-to-video product and discussed how AI can help develop press releases, target content in a more personalised way and improve media monitoring.

The event took place against a backdrop in which AI and related technologies are reshaping our lives and our industries. Page members are weighing AI’s capabilities and potential uses against the risks and threats – including a deluge of misinformation. Safety and innovation can go hand in hand, according to Ben Lyons, if the proper guardrails are in place.

Practical applications of AI

Companies should focus on pursuing the opportunities for implementation while mitigating the risks that arise. As we move from experimentation to application, it will become easier to unearth the areas where the technology can be most useful. In corporate communications, many practical applications are already evident, particularly around collating, organising and presenting data.

Many organisations are developing their own AI platforms – generative tools that can streamline search and synthesis. These tools can be used by communications and planning teams to generate insights, data and project plans. Communicators can also use AI for data analysis, market research, report generation and crafting presentations.

From experimentation to application 

In communications, AI can be used for repetitive design and production tasks, enabling team members to shift more focus onto strategic communications. 

Some tools can speed up production significantly, allowing for quick execution and iterations in multiple languages. While it’s not yet sufficiently trustworthy to create and distribute content from scratch, AI was likened to a sharp kitchen knife – something that can be very useful when handled by a properly trained person. 

The strengths of the tools lie in analysing and making sense of large data sets, identifying patterns, sorting information and drafting outlines. There is much scope for communications professionals to use them, from gleaning early insights into emerging business issues to analysing crises.

In short, delegates saw scope for AI to deliver insights and efficiencies in sentiment analysis, content summarisation and research.

False narratives and reputation

Even so, delegates showed an acute awareness of the risks, particularly around data security, privacy and misinformation.

False information and narratives that target brands or an organisation’s reputation were a key concern, as well as plagiarism, copyright issues with AI-generated content, and the need for industry standards and guidelines.

The challenges posed by global regulations, especially for companies operating across multiple jurisdictions, were a key focus. As AI and data privacy legislation evolves, multinational companies face complex challenges in responding and complying with a patchwork of regulations.

The landscape is constantly evolving, with different governments taking different approaches. European Union members have reached a deal on an AI Act, which aims to set a global standard for AI regulation, much as the bloc’s GDPR legislation set the tone for privacy and data protection.

China has also been implementing regulations for AI, with rules on recommendation algorithms and generative AI already in force, and separate regional legislation to govern use of the technology.

The US has so far taken a less regulatory approach, seeking voluntary commitments from technology companies alongside a White House Executive Order that requires the companies building the biggest models to share information with the government. Efforts are under way to enhance AI safety, protect privacy and prevent discrimination through executive orders and collaboration with international partners.

Navigating regulation and risk

Achieving a unified global AI regulation framework is likely to face challenges, given the different opinions, values and approaches that are already evident. The risk for companies is that this results in fragmentation and complexity. 

Delegates also discussed the importance of ensuring a diversity of voices in the governance of AI, suggesting that those building and developing AI tools should not be the only voices in the legislative process.

The UK hosted the world's first global AI safety summit in November 2023, aiming to foster dialogue on AI safety and establish itself as a key player in the debate. Prime Minister Rishi Sunak emphasised the importance of addressing the threats and tried to position the UK as a hub for AI safety. 

The UK’s summit also emphasised the importance of cybersecurity in achieving AI safety.

Understanding the role that cybersecurity can play helps to focus minds on the near-term risks posed by AI adoption, as well as the longer-term ones, according to Ben Lyons of Darktrace. 

In cybersecurity, AI is transforming traditional approaches by understanding organisations through their data and detecting anomalies quickly. This allows for faster detection, response and resolution of cyber incidents – crucial in the face of increasing AI-driven attacks. It can also be of benefit to corporate communicators, who need to understand and respond rapidly in times of crisis.

Building trust in communications

For communicators, much of the AI adoption debate centres on trust and understanding. Within many organisations there is a lack of faith in emerging AI tools and a need for education and training to foster a deeper understanding of the potential uses and risks.

Internal leadership support for the use of AI in communications is essential, as is focusing on areas where your company has expertise, like cybersecurity, the speakers said. And – as ever – it’s important to remain agile, with an emphasis on testing tools fast, assessing the potential uses and risks, and moving on as the technology evolves.

Companies need to work hard to align their AI policy objectives with their communication strategies, and partnering with governments and regulators can be useful.

As technology and AI continue to reshape every aspect of our lives, this Page event underscored the need to balance innovation and safety. Practical applications of AI are already being adopted at many organisations, particularly in data collation, organisation and presentation. 

The path forward lies in embracing AI's opportunities while carefully navigating its potential risks, ensuring that the proper safeguards are in place for a future where technology can supercharge communications and help us all to achieve our objectives.